Duke University associate professor Amanda Randles’ work to simulate and understand human blood flow demonstrates how high-performance computing paired with scientific principles can help improve human health. In this conversation, Amanda talks about how she brought together early interests in physics, coding, biomedicine, and even political science and policy, and how she followed her enthusiasm for the Human Genome Project. She discusses how supercomputers are pushing the boundaries of what researchers can learn about the circulatory system noninvasively, and how that knowledge, paired with wearable device data, could lead to new ways to monitor and treat patients. She also talks about her public engagement and science policy work and its importance for educating patients and supporting computational science’s future.
You’ll meet:
- Amanda Randles is the Alfred Winborne and Victoria Stover Mordecai associate professor of biomedical sciences at Duke University and director of Duke’s Center for Computational and Digital Health Innovation. Her research using high-performance computing to model the fluid dynamics of blood flow has garnered awards including one of the inaugural Sony Women in Technology Awards with Nature, the 2024 ISC Jack Dongarra Early Career Award and the 2023 ACM Prize in Computing. She was a Department of Energy Computational Science Graduate Fellowship (DOE CSGF) recipient from 2010 to 2013 and a Lawrence Fellow at Lawrence Livermore National Laboratory from 2013 to 2015.
- Follow Amanda on social media: LinkedIn, BlueSky and Instagram

From the episode:
Amanda talked about her early scientific influences starting with learning how to program computers in high school. While studying physics, computer science and political science as a Duke University undergraduate, she participated in a biotechnology focus program that allowed her to tour the J. Craig Venter Institute and a lab at the National Institutes of Health.
Before she went to graduate school, she worked at IBM with the Blue Gene supercomputer, a system originally designed to work on biological research problems. As a Ph.D. student, she developed HARVEY, the code that she and her team continue to use in developing 3D, patient-specific blood-flow simulations.
Her work often requires the use of the DOE’s leadership-class supercomputers, including the recently decommissioned Summit at the Oak Ridge Leadership Computing Facility and Aurora, the exascale system at the Argonne Leadership Computing Facility.
We discussed Amanda’s work on policy in high-performance computing, including this report in Science (subscription required for full article) that she co-authored with six other computational scientists.
A condensed version of this conversation is available on the DEIXIS magazine website and in the 2025 issue of the magazine.
Additional reading:
- Diagnostic Performance of Coronary Angiography Derived Computational Fractional Flow Reserve, Journal of the American Heart Association: Amanda discussed her team’s work simulating fractional flow reserve, a measure that helps cardiologists decide whether to place stents to alleviate blood vessel blockages.
- Establishing the longitudinal hemodynamic mapping framework for wearable-driven coronary digital twins, npj Digital Medicine: Amanda also described research toward building digital twins to model blood flow and their work that simulated more than 4.5 million heartbeats.
Related episodes:
- Pushing Limits in Computing and Biology: Amanda was a podcast guest in 2022 as part of a conversation about big computing, COVID-19 and much more with another former CSGF recipient, Anda Trifan, now a scientist at GSK.
- Joe Insley: Big Data to Beautiful Images: In this episode, Amanda mentioned the importance of in situ visualization with her current simulations on Aurora. Her team’s work on Aurora was one of the examples that Joe Insley mentioned in our Season 3 conversation about scientific visualization at Argonne Leadership Computing Facility.
Featured image of blood flow credit: Joseph A. Insley/Argonne Leadership Computing Facility
Transcript
Sarah Webb 00:04
This is Science in Parallel, a podcast about people and projects in computational science, and I’m your host, Sarah Webb. For this episode, we are pausing our series on foundation models for an episode about using powerful computers to understand blood flow. I’m speaking with Amanda Randles about her work on building digital twins, personalized models of patient circulatory systems that incorporate data from smartwatches and other wearable devices. Amanda is an associate professor of biomedical engineering at Duke University. I’ve spoken with her several times over her research career. During Amanda’s Ph.D. at Harvard University, she started working on HARVEY, a computational model of blood flow that she and her research team at Duke continue to improve to this day.
Sarah Webb 01:00
This interview is a longer audio version of a conversation that Amanda and I had earlier this spring, and appears in the 2025 issue of DEIXIS magazine. Amanda was a Department of Energy Computational Science Graduate Fellow from 2010 to 2013. We often refer to that program as CSGF for short. It supports this podcast, and its fellows are meeting this week in Washington, DC, as part of their annual program review. Amanda will be giving a keynote talk about her research. Join us to hear more about how she got her start, the latest on her blood flow research and why she engages with the public and works on science policy.
Sarah Webb 01:52
Amanda, it is great to have you back on the podcast.
Amanda Randles 01:56
Yeah, thanks for having me. I’m excited to be back.
Sarah Webb 01:59
One thing I’ve always found interesting about you and your scientific interests is that your training is in computer science and physics, but the questions you work on are very much about biology and medicine. How did you get interested in this intersection, particularly considering that your core background and training is really more in the physical sciences?
Amanda Randles 02:21
Yeah, so I guess I got started when I was in high school. I went to a math and science high school. It was a math and science center where you spent half of your day, every day, outside of your high school with people doing programming. And that’s where I first got exposed to computer science and learned how to program. We started with Pascal, and then we kind of moved on to C, and that was where, you know, I first got excited about what you can do with computer programming. But I really liked the science classes as well. I got excited about what you can do on the biological side. Like, what are the biological questions? But I liked taking the physics approach and really understanding things from, you know, first principles, fundamentals.
Amanda Randles 02:57
So when I went to Duke for undergrad, I knew I wanted to study physics. I really liked understanding the nuts and bolts, down to first principles, of how something works. I was planning on doing international relations. It was right around the time when the Human Genome Project had come out, so I was interested in how science affected policy. At that stage, I was planning to double major in political science, or international relations, and physics.
Amanda Randles 03:21
As a first year, some of my friends needed help with some of their coding classes. I spent a lot of time just helping tutor them and working with them, and realized how much I missed the computer science side. I was happy to stay up all night and pull an all-nighter coding. I realized how much I missed that and how it was definitely something I enjoyed and wanted to do, so I kind of switched over. I had a double major in computer science and physics, and I moved political science to a minor.
Amanda Randles 03:45
And then in my first year, I was taking a biotech focus program. So some of our classes were in biotechnology, and some were in the ethics and that side of it. And because it was right around the Human Genome Project time, they actually took us to the Craig Venter Institute, and they took us to go visit an NIH lab. One of the big things that was coming out was how much data they were collecting from all the genome experiments. This is back in 2005, 2006, when they were starting to sequence genomes but had so much data that they were trying to figure out: what do we do with all this data? How do we analyze it? And it started to become clear how much computer science could really affect answering biological questions. I got really excited about the role computer science could play in biomedical questions.
Amanda Randles 04:30
I joined one of the genetics labs in my freshman year to start doing research. They did try to get me to do some of the wet-lab experiments. I was not very good at it, and it was not something I was very excited about. So every time they tried to get me to do those experiments, there were other researchers who would let me start playing with Perl and the bioinformatics research. I kept spending all of my time with the researchers doing the bioinformatics side. And because, you know, wet lab wasn’t really my thing, it allowed me to get involved with biomedical data and biomedical questions, but from the computer science angle. I stayed in that lab for several years. I was always kind of involved with the biomedical questions.
Amanda Randles 05:09
I worked in a biomedical optics lab for a little while, but with that, I was always doing the computational analysis of the data that we were getting from those experiments. When I graduated, I was trying to decide if I should go into grad school or get a job, and then I was really lucky: I was offered a position at IBM on the Blue Gene supercomputer. They were using the Blue Gene supercomputer to study biomedical problems at a scale that you couldn’t do otherwise. At the time, I thought, if I were to go get a Ph.D., I would want this as my dream job afterward, and I’m being offered it now. I should just go take it and see what’s happening. I spent a few years at IBM working on the Blue Gene team. I had never really worked with parallel computing before that, so that was really the first time I saw what a supercomputer was and how you could use it. And that kind of prompted me to want to go back to graduate school and focus on how we really use these supercomputers and how we move them into something from the biological standpoint. At the time, I had talked to Tim Kaxiras, who is a professor at Harvard, and he was doing a lot of different studies in the biological space, but from the physics perspective, using models to understand biological problems and using large-scale supercomputers. So it was a really good blend of all of my interests.
Sarah Webb 06:18
So let’s fast forward to today. What are the main questions that your group is focusing on right now?
Amanda Randles 06:23
We’re still using HARVEY, which is the code I’ve been working on since the Kaxiras lab at Harvard; I developed it there, brought it through Lawrence Livermore, and then back here at Duke. We’re using HARVEY to create three-dimensional blood-flow simulations that are patient-specific. A lot of the work lately has changed into not just personalized flow models but really building a digital twin of specific patients and understanding their blood flow. We spent a long time, and probably the last time I talked to you, we were looking at really validating the model, trying to replace invasive measurements. One of the canonical examples has been for heart disease: you put a diagnostic guide wire in someone’s artery, and you measure the pressure across a stenosis, a lesion where you might need a stent. And it’s the pressure gradient across that stenosis that determines if you need a stent or if you don’t. We developed this computational tool so you can make a 3D model of the patient’s heart and run a blood-flow simulation through that 3D model. Then, instead of putting the guide wire in the patient, you measure the pressure gradient in that 3D virtual model. You don’t have to have an invasive procedure; you just need the imaging and some data that we can collect noninvasively.
Amanda Randles 07:34
We run a patient-specific flow simulation, and we conducted a large trial between Brigham and Women’s Hospital in Boston and Duke University. We compared the results from the simulation for 200 patients to the results with the guide wire and showed that you can accurately measure the flow and actually get this measure, which is called fractional flow reserve, to determine noninvasively if the patient needs a stent or doesn’t need a stent. So that’s the basis: we had an accurate model for simulating a patient’s hemodynamics and their blood flow. What we’ve been interested in lately is two major directions. One is connecting these blood-flow simulations to wearable devices, so trying to not just understand your blood flow at one single time point for that diagnostic metric but provide a remote way of monitoring how you’re doing as you go about your daily life. How do we move away from single-time-point metrics to watching someone’s blood flow over the next six months or over the next year and seeing if we can identify when someone’s going to have heart failure, or when they’re going to have an issue, before it happens, kind of like a check-engine light for the heart, and help the doctor be able to act proactively and bring them in.
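For reference, the fractional flow reserve Amanda describes is conventionally reported as the ratio of the pressure just downstream of a stenosis to the aortic pressure upstream of it, typically assessed under induced hyperemia, with values at or below roughly 0.80 commonly treated as evidence of a flow-limiting lesion:

\[
\mathrm{FFR} = \frac{P_{\text{distal}}}{P_{\text{aortic}}}, \qquad \mathrm{FFR} \le 0.80 \;\Rightarrow\; \text{lesion is likely flow-limiting.}
\]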
Amanda Randles 08:45
So there are a lot of computational challenges there. It took the world’s biggest supercomputer just to model a single heartbeat previously, and now we’re trying to say we want to do six months. So there have been algorithmic challenges, computational challenges; that’s the area we’ve been focused on. At this stage, we’ve been able to model six weeks’ worth of time, which is about 4.5 million heartbeats. The previous record had been about 30 heartbeats. So we’ve really developed a lot to expand that and make this possible, to start looking at the remote modeling.
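As a rough check on that figure, six weeks at a typical resting heart rate of about 75 beats per minute works out to roughly the number Amanda cites:

\[
6 \times 7 \times 24 \times 60 \approx 60{,}480 \ \text{minutes}, \qquad 60{,}480 \ \text{min} \times 75 \ \tfrac{\text{beats}}{\text{min}} \approx 4.5 \times 10^{6} \ \text{heartbeats}.
\]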
Amanda Randles 09:15
And then on the other end, we’re trying to understand, you know, not just going to longer time periods but higher resolution and scales. We’re trying to go down to the single-cell level: understand why a cancer cell is interacting with the wall in the way that it is. So we’ve added adhesion models, where we go all the way down to individual ligand receptors: how is that cancer cell adhering to different parts of the body? The key problem is, if you want to look at a large region of the body and all of the red blood cells, that takes a lot of computational power. Even using the world’s biggest supercomputers, you can get about a three-millimeter-cubed space and get all of the red blood cells in that space. And that is using the entirety of Summit, which was Oak Ridge’s largest supercomputer a few years ago. And that got us a three-millimeter cube.
Sarah Webb 10:01
That’s tiny.
Amanda Randles 10:02
We would like to look at much larger regions. So we had to develop new algorithms to allow capturing the cellular scale but over centimeters or even a meter of distance. It was a lot of mathematical and algorithmic changes, physics-based changes, allowing us to capture down to the ligand-receptor pairings but then capture the cell behavior and look at that adhesion over 10 centimeters. So a lot of our work now is: can we extend the time domain we’re looking at, and the scale and resolution we’re looking at, but over a larger space?
Sarah Webb 10:35
Let’s back out a little bit and talk about, you know, digital twins themselves. I guess part of what I’m interested in is how close we are to applying the study that you just described, where you compared the digital twin with the current standard of care. How far are we from basically switching people over to this less invasive type of procedure and having that be routine?
Amanda Randles 11:00
For the study that I mentioned, not our tool at this stage, but there are similar tools out there that are already FDA-approved and being used in the clinic regularly. So for single-time-point metrics for certain diseases, particularly for fractional flow reserve, there are several FDA-approved devices being used for noninvasive measurement, using digital twins or personalized models for these kinds of flow simulations to decide if you need to have a stent or not. They vary from one-dimensional models to zero-D models, and some heavier 3D flow simulations, and they’re already being used in the clinic. There are even a few tools that are being used for treatment planning, and we’ve been working on this as well, where it’s using machine learning to say, if I were to put stent X or stent Y in, how would that affect the flow for that patient? Or if there are multiple lesions in the patient, which one should I be stenting? So there are a few approved tools already being used for some of the treatment planning; that’s mainly in the heart.
Amanda Randles 11:59
We need to see that in other areas of the body and the vasculature. It’d be great to be able to use that in the leg or in the brain and other spaces. It’s less mature in some of those spaces, or at least the translation of it into the clinic hasn’t been seen as much. But a lot of what you see in the clinic today is really tied to a single heartbeat, a diagnostic metric, which also, honestly, even when we call it a digital twin, is kind of on the border: is it really just a personalized computational model? I think these models start to become digital twins when they’re connected and you’re getting real-time feedback from the patient, and you’re tying the physical model to the virtual representation, and it’s not a disjoint process. That’s when it becomes a digital twin versus just a personalized model, and that’s where tying it to the wearables is really moving us toward this idea of the digital twins. I think we’re really starting to see that happen, but it’s still in the research phase.
Amanda Randles 12:51
And I think we’re still a few years away, and there are more studies that need to happen, because for studies like the vascular digital twins, we’ve only just been able to model longer flow periods, and now we need to do all the clinical studies to show that it’s accurate over time. We’ve never been able to access these fluid-dynamic metrics continuously over long periods for these patients. So I think there are even new markers that we don’t yet know are going to predict, say, heart failure. But we need to do all the studies, the discovery work, to find out what these markers are and then show that they can be valid in the clinic. So all those kinds of studies are in process.
Sarah Webb 13:27
These are huge simulations. And you were talking about how important supercomputers have been to how you think about problems, and the problems that you’ve worked on, and you’ve been a part of the early work with Aurora. So what makes supercomputers such as Aurora essential for your work, and what have you been able to do on that system so far?
Amanda Randles 13:49
The discovery phase is really important right now. We don’t necessarily know what the biomarkers are. We don’t know what we’re looking for. And so with that, we need to collect everything and then figure it out. The goal would be that you can create a reduced-order model, or, in today’s world, we can now create these AI surrogates, so that when you translate it to the clinic, you don’t need a big supercomputer for every single person. But until we know what we’re looking for, we need to run incredibly high-resolution simulations. We need this kind of capability. The first time we ran a coronary study, it was at about a 10-micron resolution, so at the resolution of a red blood cell. It took the entirety of Argonne’s biggest supercomputer back in 2010, which is like 140,000 processors, and it took us six hours on the whole system just to run one heartbeat.
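As a rough illustration of the AI-surrogate idea Amanda mentions (not her group’s actual pipeline), a small regression model can be trained on the inputs and outputs of a batch of expensive high-fidelity simulations and then queried almost instantly. The feature names and target below are hypothetical placeholders, and the “simulation results” are synthetic:

```python
# Hypothetical sketch: train a cheap surrogate on results from expensive
# high-fidelity simulations so later predictions don't need a supercomputer.
# Feature names (stenosis severity, vessel diameter, heart rate) and the
# target (pressure drop) are illustrative placeholders, not HARVEY outputs.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sims = 500  # pretend each row came from one high-fidelity simulation

# Synthetic stand-in for a table of simulation inputs...
stenosis_severity = rng.uniform(0.2, 0.9, n_sims)   # fraction of lumen blocked
vessel_diameter_mm = rng.uniform(2.0, 5.0, n_sims)
heart_rate_bpm = rng.uniform(50, 110, n_sims)
X = np.column_stack([stenosis_severity, vessel_diameter_mm, heart_rate_bpm])

# ...and for the quantity the big simulation would report (with noise).
pressure_drop = (40 * stenosis_severity**2 / vessel_diameter_mm
                 + 0.05 * heart_rate_bpm
                 + rng.normal(0, 0.5, n_sims))

X_train, X_test, y_train, y_test = train_test_split(X, pressure_drop, random_state=0)
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out R^2:", round(surrogate.score(X_test, y_test), 3))
```

Once trained on enough high-fidelity runs, a surrogate along these lines can stand in for the expensive simulation whenever the quantity of interest and its governing inputs are already known.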
Amanda Randles 14:38
It is important to not just capture one vessel of the coronary arteries. We know that having the side branches and the entire geometry is really critical to understanding the flow patterns and what’s going on. So we can’t just limit the geometry, but we also need a really high resolution, and we want to be able to capture it over time, because the pulsatility of the flow and how it changes is incredibly important. But how often are you gonna get the entire supercomputer? So even just waiting for the supercomputers to get bigger really isn’t enough. In 2015 with the Livermore system, we were able to go to the level of like the full body, but it required the entire supercomputer, and it wasn’t even the full body. It’s like the scale of the full body, but it’s all vessels that were one millimeter in diameter and above, so you’re not even capturing anything below that, and that still required the bulk of the supercomputer.
Amanda Randles 15:27
Now with Aurora, we can run that full-body simulation on only a smaller portion of Aurora, which really allows us to run many of those simulations and do more exploratory science. If you’re going to change heart rate, or change the heart-rate variability, or other characteristics that we know are associated with disease development, can we do these parameter studies and see how they affect the pressure in the patients and really try to understand what’s going on with that patient?
Amanda Randles 15:50
If we want to move to the digital twin side, where we’re trying to capture six months of time, we need tons of compute hours, and then there are the cellular components. Now, with Aurora, we’ve been able to model at least a few billion red blood cells. It’s only a small fraction of the red blood cells in your body, but it’s a huge amount for really understanding the interaction of billions of cells at a time. And how a cancer cell might be moving through the body, where it’s interacting with the red blood cells, where it’s moving: it’s really critical to capture these cell-cell interactions, and Aurora is allowing us to model fully deformable red blood cells that are adhesive and really mimic the physiology of the system without having to make as many assumptions.
Amanda Randles 16:34
So it’s been really helpful, but it’s very computationally intensive, and it’s also resulting in tons and tons of data, like terabytes to petabytes of data being created for all of these simulations. What do we do with this data? We’ve been working with the Argonne team very closely on how you visualize that amount of data. So we’re doing a lot of in situ visualization, trying to visualize the cells while everything is still in memory on Aurora, and finding ways to access that, because you can’t really download a petabyte of data, take it to another type of system and try to visualize it there. So there are a lot of questions, aside from just how you make the simulation: how do you analyze the data that comes out of that simulation?
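A toy sketch of the in situ idea Amanda describes: reduce the data to a small summary while it is still in memory rather than writing every time step to disk. The field, update function and histogram range here are hypothetical placeholders; production workflows couple dedicated in situ visualization frameworks to the running simulation.

```python
# Toy sketch of in situ analysis: keep a running summary (here, a histogram
# of velocity magnitudes) while the simulation advances, instead of dumping
# the full field at every step. Field names and the solver are placeholders.
import numpy as np

rng = np.random.default_rng(0)
nsteps, nbins = 1000, 64
edges = np.linspace(0.0, 0.1, nbins + 1)     # assumed velocity-magnitude range
histogram = np.zeros(nbins, dtype=np.int64)  # summary accumulated across steps

velocity = np.zeros((128, 128, 3))           # stand-in for a simulated field

def advance_one_step(v):
    """Placeholder for one time step of a real flow solver."""
    return v + rng.normal(0.0, 1e-3, v.shape)

for step in range(nsteps):
    velocity = advance_one_step(velocity)
    speed = np.linalg.norm(velocity, axis=-1)
    histogram += np.histogram(speed, bins=edges)[0]   # reduce while in memory

# Only the small summary leaves the node, not terabytes of raw field data.
np.save("speed_histogram.npy", histogram)
```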
Sarah Webb 17:11
Within your team, I mean, it sounds like there are so many biological questions that you’re thinking about, the fundamental fluid dynamics of what’s going on.
Amanda Randles 17:20
Yeah.
Sarah Webb 17:21
The algorithmic questions, the visualization questions. Within your team, do you have people who kind of gravitate towards certain problems? With something that’s so interdisciplinary, how does that come together?
Amanda Randles 17:35
It’s kind of amazing. The people on my team are really, like, very ambitious. They’re willing to take on hard problems. They’re all incredibly smart. They all have to know a little bit of all of the pieces. You can’t just have your fluids person who then talks to the computer science person. I always feel bad because, when they join the lab, they have to learn the lattice Boltzmann method that we use. They have to learn a little bit of high-performance computing. Most people are in biomedical engineering, so they’re doing all of this to answer a biomedical question. So they need to also get to that stage, so they have all of this base knowledge.
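For readers who haven’t met the lattice Boltzmann method Amanda mentions, here is a deliberately minimal two-dimensional (D2Q9) sketch of the core update: a collision step that relaxes each site’s populations toward a local equilibrium, followed by streaming of those populations along the lattice directions. It is illustrative only; HARVEY is a large-scale, parallel 3D code, and the grid size, relaxation time and forcing below are arbitrary choices.

```python
# Minimal D2Q9 lattice Boltzmann sketch (BGK collision + streaming) on a
# periodic grid, driven by a small constant body force in x.
# Illustrative only; not HARVEY, which is a massively parallel 3D code.
import numpy as np

nx, ny = 64, 32      # lattice size (arbitrary)
tau = 0.8            # relaxation time; sets the fluid viscosity
force_x = 1e-5       # body force driving the flow

# D2Q9 discrete velocities and their weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    """Second-order equilibrium distribution for each of the 9 directions."""
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

rho = np.ones((nx, ny))        # start from uniform density at rest
ux = np.zeros((nx, ny))
uy = np.zeros((nx, ny))
f = equilibrium(rho, ux, uy)   # populations, shape (9, nx, ny)

for step in range(1000):
    # macroscopic moments from the populations
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho

    # BGK collision toward equilibrium, plus a simple first-order forcing term
    f += -(f - equilibrium(rho, ux, uy)) / tau \
         + 3 * w[:, None, None] * c[:, 0, None, None] * force_x

    # streaming: each population hops one site along its lattice velocity
    for i in range(9):
        f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))

print("mean x-velocity after 1000 steps:", ux.mean())
```

The collision step controls viscosity through the relaxation time, and the streaming step propagates information across the grid; the same collide-and-stream structure, extended to three dimensions and distributed across many nodes, is what codes like HARVEY run at supercomputer scale.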
Amanda Randles 18:06
And then different people in the lab, as you’re saying, kind of gravitate to different pieces. So we have one person whose thesis is really focused on adhesion and cellular interaction, but he’s also the person who has ported our code to every imaginable programming model, and is, you know, amazing on the parallel computing side, and is testing the efficiency. And then along the way he’s getting supercomputing papers on the portability and efficiency of the code, while trying to do all of this so that he can answer a question about cell adhesion down the line.
Amanda Randles 18:37
So the students are very interdisciplinary, and they bring all those skill sets, and they focus on making these computational advances so that they can answer their biomedical question. We have people who end up focusing on different areas. The lab is kind of separated into key project areas, and we meet in mini-teams to get these pieces together. But then in the group meeting, we have a part of the meeting where you’re reporting back from every team, and part of it is: what are the points of discussion that other people need to know about? So if you developed a new technique, or something new you can measure, or a way of speeding up the code that might be useful, if you’re on the cell team but it’d be helpful for the heart failure team, you report that back in the group meeting and share between teams. So if you notice, in my publications, there’s never just me and one person. It’s very team-based science, and it works really well.
Sarah Webb 19:21
So I’m gonna shift gears a little bit, because you’re definitely out there talking about your work. You’re active on social media, you’ve done interviews with science journalists, and you’re participating in policy discussions about the future of computing. Why are these conversations beyond the lab important?
Amanda Randles 19:39
It’s really critical for people to understand the kind of work that we’re trying to do and the role that it can play, on multiple fronts. There’s a lot of work with AI and digital twins and all of that in the biomedical space, and it’s important for people to understand what’s really happening, what’s coming from the experts. How are we using it? Understanding the impact on security, the impact of, how is this really going to help you? What is the benefit of this kind of research? And then, what can you tangibly expect to see? Like you were asking earlier: what’s going to happen in the next few years? What is the difference between science fiction, what we think is going to happen and what is really going on?
Amanda Randles 20:16
And it’s helping people understand what’s available now. What can you make use of now, even just in terms of the data we get from wearables, the data that you can access on your own? I think, on the biomedical side, we’re at a shifting point with wearables and with the way patients now have more access to their data, and they can see what’s happening with their physiological state. We’re moving away from the point where all of your information is owned by the doctor and sits in your medical records. It’s more empowering for the patients. It’s important for the patient to really understand what those opportunities are, how to actually make use of this data and what’s out there.
Amanda Randles 20:56
It’s critical on that side. On the HPC front, there’s obviously a lot going on with AI development. We’ve come to a world where everybody actually knows what a GPU is, which is pretty cool, but not everybody understands the nuances of what that means. One of the issues we have is that there’s a lot of development for AI-based GPUs, which is fantastic, but potentially a lot of the architecture that’s being created to support some of these AI algorithms may not be what we need to support physics-based modeling and large-scale, high-performance computing. And we need to figure out what the path is to maintain those types of specialized hardware but also have them come in a way that is going to be useful for these large-scale models, and to make clear why these large-scale models are important and what we can get out of them.
Amanda Randles 21:46
Through the DOE, we used to have the big exascale project, which really helped us move technology forward for this space, not just the hardware but the software and the training for people to be able to use these systems. We don’t have a plan going forward after the exascale project. So a lot of it is: what do we do from here, and how do we maintain that and improve on it? How do we build a workforce that can use this technology and use it intelligently? I think it’s an important conversation to be having with policymakers, with the public, and informing on all ends.
Sarah Webb 22:18
You were a co-author, with a number of other computational scientists, of this recent policy article in Science about the need for these long-term plans and technology leadership in HPC. What right now would you say are the critical needs in this area?
Amanda Randles 22:35
My involvement with this article came out of one of the committees I was helping lead through the DOE. We were trying to look at what the future is for the large systems. What’s the path for ASCR? What is their 10- to 15-year plan, and where would they be going?
Sarah Webb 22:49
ASCR is the acronym A-S-C-R and stands for the Advanced Scientific Computing Research program within the U.S. Department of Energy’s Office of Science.
Amanda Randles 23:01
And every conversation we had kept coming back to: it’s great that we’re trying to get this plan in place, but we need so much more than just the hardware. There’s so much more that needs to happen. We kept having all of these conversations, and it kind of snowballed into, well, why don’t we just write this other article? The key piece is that there’s just so much that needs to happen. It’s not going to be funded by just the Department of Energy creating a large supercomputer and putting it out there. We need a whole-of-government approach. It’s an interdisciplinary problem that needs to be solved and needs to be attacked from all angles. We need the workforce that is going to be trained to use these systems, but we also need systems that are going to be able to address pressing problems for the nation, and we need to understand what those problems are.
Amanda Randles 23:44
It’s not all just AI. It’s what kind of models we need to build, and then what kind of systems can actually support answering those questions. It’s not just health and digital twins; we have different architectural needs than you might have for other types of workloads, and we need to assess that. We need to have a plan for how we’re going to ensure strong development of new technology, not just the chip but also the interconnects, the entire system. But it’s also really important to have support for the software stacks, not just at the application level but all the way through. The exascale program really supported the software stacks and the development of the libraries that we need, including the underlying mathematical libraries, and the support on that end. We need sustained support for those kinds of libraries and that kind of software, to make them usable and adapt them to all the new architectures that come out. And currently there really isn’t a clear plan in place for how we’re going to have long-term, sustained support for development on all of those fronts. I think it’s critical that we have these conversations, that we plan early and that we start trying to figure this out.
Sarah Webb 24:55
You’ve been talking about workforce. How has the DOE CSGF influenced your career?
Amanda Randles 25:01
CSGF, I think, is amazing, and it had a huge impact. I’m still heavily involved with the CSGF itself, but I’d also say a lot of my collaborators and a lot of my mentors came from it. I’ve gotten so much out of CSGF, aside from just the original funding that gave you the flexibility to do the research you really wanted to do. I’m not sure I would have been able to develop HARVEY in grad school and have that flexibility, because it was very much a side project for my advisor. I think having CSGF in the first place gave me flexibility to pursue the research I was excited about. And then I was able to do my practicum at Livermore, and I went back and did my postdoc at Livermore, and it really set up those interactions. It introduced me to mentors and allowed me to pursue research and set up research programs I don’t think I would have otherwise been able to do.
Amanda Randles 25:48
When I was a grad student, I had many mentors who helped me figure out the next stage of my career, and that really came from the CSGF program. Even still, I go to people like David Keyes, who’s heavily involved with CSGF, and ask him for career advice. He still writes letters for me because of our first interactions at CSGF. It really helps you have a network of people outside of your graduate program and introduces you, from your cohort to the alumni to the advisors involved. It just creates a real community. As opposed to other fellowships where you just get funding, they get you together every summer, and you really bond. You get to know each other. It’s been a really fantastic experience for me. Right now, we have a virtual seminar series that we’re setting up at Duke. We have a new center at Duke focused on computational and digital health. We’ve had a lot of former CSGF speakers coming in, and there’s a lot of connection with CSGF through that.
Sarah Webb 26:42
What pieces of advice would you give to early-career researchers, say, current CSGF fellows or other up-and-coming computational scientists?
Amanda Randles 26:53
I think there’s a lot. Especially having just mentioned David Keyes: be comfortable, especially within CSGF and other places, reaching out and asking for help, and asking people for advice. I know we’re all busy, and we may not be able to respond, and don’t be offended if people don’t respond, but it’s worth asking. I know when I was in grad school, I was trying to figure out what to do after my Ph.D., and I was very stressed out. David happened to be giving a talk at Harvard that day, and I just emailed him and said: I’m really stressed. I need some help. What do I do? And he met with me after his talk, and in the meetings with David, he helped introduce me to people who helped.
Amanda Randles 27:31
I had job interviews based on that discussion. He’s been, you know, a resource for bouncing ideas off of and trying to figure out career steps. And it’s just feeling comfortable reaching out to people and asking for help, because we’ve been through a lot of these things. We all understand that it’s stressful. I think the job market is even getting more stressful. Ask for help. Every time I’ve been changing jobs and doing things, I’ve used alumni networks, through CSGF or through the universities I’ve been at, to try to find jobs and reach out for informational interviews. It’s okay to ask for help, and people have been through it.
Amanda Randles 28:03
And then, on the research front, I’d just say: don’t bend to what you think should be funded. Think about what really motivates you, what you’re most passionate about, and be willing to take big risks and try crazy projects. Because I think a lot of the more fun projects are the ones that are really at these interdisciplinary boundaries and are high-risk, high-reward projects. They might seem crazy at first pass, but a lot of interesting things can come out of them. Keep trying, and don’t be afraid of failure. Failure is definitely a key part of this, and we all have it all the time. Just be willing to take a risk and try different projects.
Sarah Webb 28:40
Great. Amanda, it was a pleasure, as always. Thank you so much.
Amanda Randles 28:44
Thank you.
Sarah Webb 28:44
To follow Amanda Randles and her lab on social media and to learn more about the people and projects mentioned in this episode, please check out our show notes at scienceinparallel.org. Amanda was also a guest on the podcast in 2022. I spoke with her and another former CSGF recipient, Anda Trifan, about the interface of computing with biomedical research, including their work on COVID-19.
Sarah Webb 29:16
Science in Parallel is produced by the Krell Institute and is a media project of the Department of Energy Computational Science Graduate Fellowship program. Any opinions expressed are those of the speaker and not those of their employers, the Krell Institute or the U.S. Department of Energy. Our music is by Steve O’Reilly. This episode was written and produced by Sarah Webb and edited by Susan Valot.
Transcript prepared using otter.ai with light editing.